User-specific Audio Rendering and Steerable Sound for Distributed Virtual Environments
Authors
Abstract
We present a method for user-specific audio rendering of a virtual environment that is shared by multiple participants. The technique differs from methods such as amplitude differencing, HRTF filtering, and wave field synthesis. Instead, we model virtual microphones within the 3-D scene, each of which captures audio to be rendered to a loudspeaker. Spatialization of sound sources is accomplished via acoustic physical modelling, yet our approach also allows for localized signal processing within the scene. In order to control the flow of sound within the scene, the user has the ability to steer audio in specific directions. This paradigm leads to many novel applications where groups of individuals can share one continuous interactive sonic space.
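As a rough illustration of the virtual-microphone idea described above (a minimal sketch, not the authors' implementation), the Python code below computes a gain and propagation delay for one source/microphone pair using inverse-distance attenuation, a speed-of-sound delay, and a steerable cardioid directivity. All names, constants, and the specific directivity model are assumptions made for illustration.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed for computing the propagation delay

class VirtualMicrophone:
    """Hypothetical virtual microphone placed in the shared 3-D scene; each
    microphone feeds one loudspeaker."""

    def __init__(self, position, steering_direction):
        self.position = np.asarray(position, dtype=float)
        # Unit vector the user "steers" toward; sound arriving from this
        # direction is favoured over sound arriving from behind it.
        d = np.asarray(steering_direction, dtype=float)
        self.steering = d / np.linalg.norm(d)

    def render_params(self, source_position):
        """Return (gain, delay_seconds) for a point source at source_position."""
        offset = np.asarray(source_position, dtype=float) - self.position
        distance = max(np.linalg.norm(offset), 1e-3)   # avoid division by zero
        direction = offset / distance
        # Cardioid-like directivity: 1 when the source lies along the steering
        # direction, 0 when it lies directly behind it.
        directivity = 0.5 * (1.0 + np.dot(direction, self.steering))
        gain = directivity / distance                  # inverse-distance attenuation
        delay = distance / SPEED_OF_SOUND              # physical propagation delay
        return gain, delay

# Example: one virtual microphone steered along +x, one source off to the side.
mic = VirtualMicrophone(position=[0.0, 0.0, 0.0], steering_direction=[1.0, 0.0, 0.0])
gain, delay = mic.render_params(source_position=[2.0, 1.0, 0.0])

In a full system, the per-source gain and delay would drive a delay line and mixer for the loudspeaker assigned to that microphone; the steering vector is what the user manipulates to direct sound within the scene.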
Similar Resources
Rendering environmental voice reverberation for large-scale distributed virtual worlds
We propose an approach that can render environmental audio effects for a large number of concurrent voice users immersed in a large distributed virtual world. In an off-line step, our approach efficiently computes acoustic similarity measures based on average path length, reflection direction and diffusion throughout the environment. The similarity measures are used to adaptively decompose the ...
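A minimal sketch of what such an acoustic similarity measure could look like, assuming (hypothetically) that each scene region is summarized by a feature vector of average path length, mean reflection direction (a unit vector), and a diffusion value in [0, 1]; the weighting scheme and function name are illustrative, not taken from the cited work.

import numpy as np

def acoustic_similarity(region_a, region_b, w_path=1.0, w_dir=1.0, w_diff=1.0):
    """Compare two hypothetical per-region descriptors of the form
    (average_path_length_m, mean_reflection_direction_unit_vec, diffusion)."""
    path_a, dir_a, diff_a = region_a
    path_b, dir_b, diff_b = region_b
    path_term = abs(path_a - path_b) / max(path_a, path_b)  # relative path-length difference
    dir_term = 1.0 - np.dot(dir_a, dir_b)                   # 0 when reflection directions coincide
    diff_term = abs(diff_a - diff_b)                        # difference in diffusion
    distance = w_path * path_term + w_dir * dir_term + w_diff * diff_term
    return 1.0 / (1.0 + distance)                           # higher value = more similar

Regions whose similarity exceeds a threshold could then be merged during an adaptive decomposition of the environment, which is the role the similarity measures play in the approach summarized above.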
Virtual Reality System at RWTH Aachen University
During the last decade, Virtual Reality (VR) systems have progressed from primary laboratory experiments into serious and valuable tools. Thereby, the number of useful applications has grown considerably, covering conventional use, e.g., in science, design, medicine and engineering, as well as more visionary applications such as creating virtual spaces that aim to act real. However, the hig...
Development of an Audio-Visual Saliency Map
General Presentation of the Research Domain: The focus of the REVES research group is on image and sound synthesis for virtual environments. Our research is on the development of new algorithms to treat complex scenes in real time, both for image rendering (for example the capture and rendering of trees using an image-based technique [1]) and for sound (for example using perceptual masking and cl...
Audiogarden: towards a Usable Tool for Composite Audio Creation
This project presents a new approach to sound composition for soundtrack composers and sound designers. We propose a tool for usable sound manipulation and composition that targets sound variety and expressive rendering of the composition. We first automatically segment audio recordings into atomic grains which are displayed on our navigation tool according to their timbre. To perform the synth...
Audio-Visual Perception in Interactive Virtual Environments
Interactive virtual environments (VEs) are gaining more and more fidelity. Their high-quality stimuli undoubtedly increase the feeling of presence and immersion as "being in the world", but they may also affect the user's performance on specific tasks. Vision and spatial hearing are the main contributors to our perception. Sight clearly dominates and has been the focus of research for a long t...
Journal title:
Volume/Issue:
Pages: -
Publication date: 2007